# Mathematical Logic Optimization
## OpenThaiGPT R1 32b Instruct
License: Other
OpenThaiGPT R1 32b is a 32-billion-parameter Thai reasoning model that excels at Thai mathematical, logical, and code reasoning tasks, outperforming larger-scale models.
Tags: Large Language Model · Transformers · Supports Multiple Languages

Publisher: openthaigpt · Downloads: 403 · Likes: 3
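
The card tags the model for the Transformers library, so a minimal Python sketch of loading it that way is shown below; the repository id `openthaigpt/openthaigpt-r1-32b-instruct` and the Thai prompt are illustrative assumptions, not taken from the listing.

```python
# Minimal sketch: loading the model with Hugging Face transformers (assumed repo id).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openthaigpt/openthaigpt-r1-32b-instruct"  # hypothetical repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # spread the 32B weights across available devices
    torch_dtype="auto",
)

# Example Thai math prompt: "Find x such that 2x + 3 = 11."
messages = [{"role": "user", "content": "จงหาค่า x ที่ทำให้ 2x + 3 = 11"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```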
## Qwen2.5 MOE 2X1.5B DeepSeek Uncensored Censored 4B Gguf
License: Apache-2.0
A Qwen2.5 MoE (Mixture of Experts) model composed of two Qwen 2.5 DeepSeek 1.5B models, one censored/regular and one uncensored, combined into a 4B model in which the uncensored DeepSeek Qwen 2.5 1.5B variant dominates the model's behavior.
Tags: Large Language Model · Supports Multiple Languages
Publisher: DavidAU · Downloads: 678 · Likes: 5
## Phi 3 Mini 4k Instruct Gguf
License: MIT
Phi-3-Mini-4K-Instruct is a lightweight, state-of-the-art open model with 3.8 billion parameters, trained with a focus on high-quality, reasoning-dense data, and suitable for commercial and research use in English.
Tags: Large Language Model · Supports Multiple Languages
Publisher: microsoft · Downloads: 20.51k · Likes: 488
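
Since this build is distributed as GGUF, a short sketch of running it with llama-cpp-python follows; the repository and file names mirror the usual Phi-3 GGUF layout and should be treated as assumptions to verify against the actual model page.

```python
# Minimal sketch: running a GGUF build with llama-cpp-python.
# The repo_id and filename below are assumptions; check the actual model page.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="microsoft/Phi-3-mini-4k-instruct-gguf",  # assumed repository id
    filename="Phi-3-mini-4k-instruct-q4.gguf",        # assumed quantized file
    n_ctx=4096,                                       # matches the 4K context window
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Why is the sum of two even numbers always even?"}],
    max_tokens=200,
)
print(response["choices"][0]["message"]["content"])
```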
## Una Cybertron 7b V3 OMA
License: Apache-2.0
UNA-cybertron-7b-v3 is a 7-billion-parameter large language model developed by the OMA team, trained using UNA (Unified Neural Alignment) technology, and excels at mathematics, logic, and reasoning.
Tags: Large Language Model · Transformers

Publisher: fblgit · Downloads: 103 · Likes: 14